
    Hash-Tree Anti-Tampering Schemes

    Procedures that provide detection, location and correction of tampering in documents are known as anti-tampering schemes. In this paper we describe how to construct an anti-tampering scheme using a pre-computed tree of hashes. The main problems in constructing such a scheme are its computational feasibility and its candidate reduction process. We show how to solve both problems by the use of secondary hashing over a tree structure. Finally, we give brief comments on our ongoing work in this area.
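
    The tree-of-hashes idea can be sketched in a few lines. The following is a minimal Merkle-style illustration, not the paper's actual scheme (the secondary-hashing construction is not reproduced): detection needs only the stored root hash, while location compares lower levels of the tree to narrow down the candidate tampered blocks.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return a list of levels; level 0 holds the leaf hashes,
    the last level holds the single root hash."""
    level = [h(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        if len(level) % 2:   # pad odd levels by repeating the last hash
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def detect_tampering(tree, blocks) -> bool:
    """Detection needs only the stored root hash."""
    return build_tree(blocks)[-1][0] != tree[-1][0]

def locate_tampering(tree, blocks):
    """Location compares leaf hashes; the upper levels are what let a
    real scheme descend from the root instead of rehashing every block."""
    fresh = build_tree(blocks)
    return [i for i, (a, b) in enumerate(zip(tree[0], fresh[0])) if a != b]
```

    Detection is cheap because only the root hashes are compared; location then uses the pre-computed lower levels to reduce the candidate set.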

    IWAVE: Interactive Web-based Algorithm Visualization Environment

    This report discusses one of the challenges faced in the teaching and learning of introductory computer programming. The demographic of students has changed considerably in recent years, and teaching styles must adapt accordingly to suit the change in learning styles. Some of the issues involved in making these changes are discussed, before introducing a method for calculating the relative difficulty of a concept based on the submission rate and average mark of its exercises. This method was applied to the results of students’ programming exercises throughout a semester to identify one concept area that is particularly problematic: Arrays. A customized visual learning environment for interactive animation of programming code was developed, allowing students to visualize code and the effects of any changes they make. In addition, a deployment wizard was developed to allow a practitioner to integrate the learning environment with their existing learning material with minimal effort. These tools were then used to create a demonstration learning resource targeted towards the concept of Arrays.
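
    The difficulty-ranking step might look something like the sketch below. The report's exact formula is not given in this abstract, so the scoring rule and all numbers here are assumptions: a concept is treated as harder when fewer students submit its exercises and the average mark is lower.

```python
def difficulty(submission_rate: float, average_mark: float) -> float:
    """Hypothetical score; both inputs in [0, 1], higher result
    means a relatively harder concept."""
    return 1.0 - submission_rate * average_mark

# (submission rate, average mark) per concept; invented illustrative data
concepts = {"Loops": (0.90, 0.75), "Arrays": (0.60, 0.50), "File IO": (0.85, 0.80)}
ranked = sorted(concepts, key=lambda c: difficulty(*concepts[c]), reverse=True)
```

    With these invented numbers, Arrays ranks as the most difficult concept, matching the pattern the report describes.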

    Momentum in US Open men’s singles tennis

    Most of the research in racket sports has focussed on point outcomes rather than point sequences and other events that may trigger positive or negative momentum. Therefore, the purpose of the current investigation was to determine whether point outcome in US Open men’s singles tennis matches is associated with (a) the outcomes of the previous one, two or three points and (b) events within previous points such as aces, double faults, winners and errors. A further purpose was to investigate whether the outcomes of service games were significantly associated with the outcomes of the receiving and next serving games that followed. Ninety player performances from 45 US Open men’s singles matches were analysed as a sample and individually. The outcomes of the previous 1 to 3 points within service games had no significant influence on the outcome of the current point (p > 0.291). Where breaks of serve had been achieved despite the server having game points, the player breaking serve was significantly more likely to hold serve in the next game (100% v 74%, p < 0.001). The investigation suggests that momentum affects different players in different ways, which has implications for coaching and psychological support for tennis players.
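
    The kind of association test behind the reported p-values can be illustrated with a chi-square test of independence on a 2x2 table of previous-point versus current-point outcomes. The counts below are invented for illustration; the study's data and exact statistical procedure are not reproduced here.

```python
def chi_square_2x2(table):
    """Chi-square statistic (1 degree of freedom, no continuity
    correction) for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: previous point lost/won; columns: current point lost/won.
# Invented counts for one hypothetical player's service points.
table = [[40, 60], [55, 45]]
stat = chi_square_2x2(table)
# Compare against the 5% critical value of chi-square with 1 df (3.84).
significant = stat > 3.84
```

    A statistic below 3.84 (as the study found for previous-point outcomes, p > 0.291) means no significant association between consecutive points.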

    Laughing Eyes, Don't You Cry

    Contains advertisements and/or short musical examples of pieces being sold by the publisher. https://digitalcommons.library.umaine.edu/mmb-vp/7023/thumbnail.jp

    The data integrity problem and multi-layered document integrity

    Data integrity is a fundamental aspect of computer security that has attracted much interest in recent decades. Despite a general consensus on the meaning of the problem, the lack of a formal definition has led to spurious claims such as "tamper proof", "prevent tampering", and "tamper protection", which are all misleading in the absence of a formal definition. Ashman recently proposed a new approach for protecting the integrity of a document that claims the ability to detect, locate, and correct tampering. If determining integrity is only part of the problem, then a more general notion of data integrity is needed. Furthermore, in the presence of a persistent tamperer, the problem is more concerned with maintaining and proving the integrity of data, rather than determining it. This thesis introduces a formal model for the more general notion of data integrity by providing a formal problem semantics for its sub-problems: detection, location, correction, and prevention. The model is used to reason about the structure of the data integrity problem and to prove some fundamental results concerning the security and existence of schemes that attempt to solve these sub-problems. Ashman's original multi-layered document integrity (MLDI) paper [1] is critically evaluated, and several issues are highlighted. These issues are investigated in detail, and a series of algorithms is developed to present the MLDI schemes. Several factors that determine the feasibility of Ashman's approach are identified in order to prove certain theoretical results concerning the efficacy of MLDI schemes.

    Optimising acute stroke pathways through flexible use of bed capacity: a computer modelling study

    BACKGROUND: Optimising capacity along clinical pathways is essential to avoid severe hospital pressure and help ensure best patient outcomes and financial sustainability. Yet, typical approaches, using only average arrival rate and average lengths of stay, are known to underestimate the number of beds required. This study investigates the extent to which averages-based estimates can be complemented by a robust assessment of additional ‘flex capacity’ requirements, to be used at times of peak demand. METHODS: The setting was a major one-million-resident healthcare system in England, moving towards a centralised stroke pathway. A computer simulation was developed for modelling patient flow along the proposed stroke pathway, accounting for variability in patient arrivals, lengths of stay, and the time taken for transfer processes. The primary outcome measure was flex capacity utilisation over the simulation period. RESULTS: For the hyper-acute, acute, and rehabilitation units respectively, flex capacities of 45%, 45%, and 36% above the averages-based calculation would be required to ensure that only 1% of stroke presentations find the hyper-acute unit full and have to wait. Some amount of flex capacity would be required approximately 30%, 20%, and 18% of the time for the three units, respectively. CONCLUSIONS: This study demonstrates the importance of appropriately capturing variability within capacity plans, and provides a practical and economical approach which can complement commonly-used averages-based methods. Results of this study have directly informed the healthcare system’s new configuration of stroke services. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12913-022-08433-0
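
    The modelling idea can be sketched as a simple stochastic occupancy simulation: with Poisson arrivals and variable lengths of stay, peak bed demand exceeds the averages-based estimate (mean arrival rate times mean length of stay). All parameter values below are illustrative assumptions, not the study's calibrated inputs.

```python
import math
import random

def poisson(lam: float) -> int:
    """Knuth's algorithm for sampling a Poisson-distributed count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def simulate(arrivals_per_day: float, mean_los_days: float, days: int):
    """Daily bed occupancy with Poisson arrivals and exponential stays."""
    stays, occupancy = [], []   # remaining stay (days) per current in-patient
    for _ in range(days):
        stays = [s - 1 for s in stays if s > 1]   # discharge finished stays
        for _ in range(poisson(arrivals_per_day)):
            stays.append(random.expovariate(1 / mean_los_days))
        occupancy.append(len(stays))
    return occupancy

random.seed(1)
occ = simulate(arrivals_per_day=2.0, mean_los_days=5.0, days=365)
avg_beds = 2.0 * 5.0            # averages-based estimate: 10 beds
flex = max(occ) / avg_beds - 1  # extra capacity needed at peak demand
```

    Even in this toy version, peak occupancy over a year exceeds the averages-based bed count, which is the underestimation the study addresses with its flex-capacity assessment.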